Achieving Reliable Coordination of Residential Plug-in Electric Vehicle Charging: A Pilot Study
Wide-scale electrification of the transportation sector will require careful
planning and coordination with the power grid. Left unmanaged, uncoordinated
charging of electric vehicles (EVs) at increased levels of penetration will
amplify existing peak loads, potentially outstripping the grid's capacity to
reliably meet demand. In this paper, we report findings from the OptimizEV
Project - a real-world pilot study in Upstate New York exploring a novel
approach to coordinated residential EV charging. The proposed coordination
mechanism seeks to harness the latent flexibility in EV charging by offering EV
owners monetary incentives to delay the time required to charge their EVs. Each
time an EV owner initiates a charging session, they specify how long they
intend to leave their vehicle plugged in by selecting from a menu of deadlines
that offers lower electricity prices the longer they are willing to delay the
time required to charge their EV. Given a collection of active charging
requests, a smart charging system dynamically optimizes the power being drawn
by each EV in real time to minimize strain on the grid, while ensuring that
each customer's car is fully charged by its deadline. Under the proposed
incentive mechanism, we find that customers are frequently willing to engage in
optimized charging sessions, allowing the system to delay the completion of
their charging requests by more than eight hours on average. Using the
flexibility provided by customers, the smart charging system was shown to be
highly effective in shifting the majority of EV charging loads off-peak to fill
the night-time valley of the aggregate load curve. Customer opt-in rates
remained stable over the span of the study, providing empirical evidence in
support of the proposed coordination mechanism as a potentially viable
"non-wires alternative" to meet the increased demand for electricity driven
growing EV adoption.Comment: 19 pages, 12 figure
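The abstract describes the smart charging system only at a high level. The sketch below illustrates one simple way a deadline-constrained, valley-filling scheduler of this kind could work: each charging request is greedily allocated to the least-loaded time slots that precede its deadline. This is a hypothetical heuristic written for illustration, not the OptimizEV optimizer; the data layout, field names, and the greedy rule are all assumptions.

```python
# Hypothetical sketch of deadline-constrained "valley-filling" EV charging,
# in the spirit of the smart charging system described above. Not the
# project's actual optimizer; all names and the greedy rule are assumptions.

def schedule_charging(base_load, requests, slot_hours=1.0):
    """Greedily allocate each EV's energy to the least-loaded feasible slots.

    base_load : list of non-EV demand (kW) per time slot
    requests  : list of dicts with keys
                'energy'   - energy still needed (kWh)
                'deadline' - last usable slot index (exclusive)
                'max_kw'   - charger power limit (kW)
    Returns the per-slot EV load added on top of base_load.
    """
    total = list(base_load)              # running aggregate load per slot
    ev_load = [0.0] * len(base_load)     # EV-only load per slot

    for req in requests:
        remaining = req['energy']
        assigned = [0.0] * len(base_load)   # power already given to this EV
        while remaining > 1e-9:
            # Feasible slots: before the deadline and below the charger limit.
            feasible = [t for t in range(req['deadline'])
                        if assigned[t] < req['max_kw']]
            if not feasible:
                break                     # cannot fully serve this request
            t = min(feasible, key=lambda s: total[s])   # fill the valley
            headroom_kw = req['max_kw'] - assigned[t]
            add_kwh = min(remaining, headroom_kw * slot_hours)
            add_kw = add_kwh / slot_hours
            assigned[t] += add_kw
            total[t] += add_kw
            ev_load[t] += add_kw
            remaining -= add_kwh
    return ev_load

# Toy example (made-up numbers): eight one-hour slots with an evening peak.
if __name__ == "__main__":
    base = [6.0, 5.5, 4.0, 3.0, 2.5, 2.5, 3.0, 4.0]
    evs = [{'energy': 10.0, 'deadline': 8, 'max_kw': 7.2},
           {'energy': 6.0,  'deadline': 6, 'max_kw': 3.3}]
    print(schedule_charging(base, evs))
```

In this toy version the EV load naturally migrates into the low-load slots, which is the valley-filling behavior the study reports; a production system would instead solve a coordinated optimization over all active sessions in real time.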
Safe Linear Stochastic Bandits
We introduce the safe linear stochastic bandit framework---a generalization
of linear stochastic bandits---where, in each stage, the learner is required to
select an arm with an expected reward that is no less than a predetermined
(safe) threshold with high probability. We assume that the learner initially
has knowledge of an arm that is known to be safe, but not necessarily optimal.
Leveraging this assumption, we introduce a learning algorithm that
systematically combines known safe arms with exploratory arms to safely expand
the set of safe arms over time, while facilitating safe greedy exploitation in
subsequent stages. In addition to ensuring the satisfaction of the safety
constraint at every stage of play, the proposed algorithm is shown to exhibit
an expected regret that is no more than O(√T log T) after T stages of play.
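As a rough illustration of the safe-exploration / greedy-exploitation idea sketched in this abstract, the following is a minimal, hypothetical implementation of a confidence-bound rule that certifies arms as safe and otherwise mixes the known safe arm with an exploratory arm. It is not the paper's algorithm: the confidence width beta, the mixing schedule, and all names are assumptions made for the example.

```python
# Minimal sketch of a safe linear bandit step: certify arms whose reward
# lower confidence bound clears the threshold, exploit greedily over that
# set, and otherwise explore via a mixture with the known safe arm.
# Illustrative assumptions throughout; not the algorithm from the paper.

import numpy as np

class SafeLinearBandit:
    def __init__(self, arms, safe_arm, threshold, lam=1.0, beta=2.0):
        self.arms = arms                # candidate arms, shape (K, d)
        self.safe_arm = safe_arm        # arm known a priori to be safe
        self.threshold = threshold      # required expected-reward level
        d = arms.shape[1]
        self.V = lam * np.eye(d)        # regularized design matrix
        self.b = np.zeros(d)            # accumulated arm * reward
        self.beta = beta                # confidence-width parameter (assumed)

    def _bounds(self, x):
        """Lower/upper confidence bounds on the expected reward of arm x."""
        theta_hat = np.linalg.solve(self.V, self.b)
        width = self.beta * np.sqrt(x @ np.linalg.solve(self.V, x))
        mean = x @ theta_hat
        return mean - width, mean + width

    def select(self, explore_frac=0.1):
        # Arms whose reward lower bound clears the threshold are certified safe.
        certified = [x for x in self.arms
                     if self._bounds(x)[0] >= self.threshold]
        if certified and np.random.rand() > explore_frac:
            # Greedy (optimistic) exploitation over the certified-safe set.
            return max(certified, key=lambda x: self._bounds(x)[1])
        # Safe exploration: mix the known safe arm with an exploratory arm,
        # shrinking the exploratory weight until the mixture is provably safe.
        x_exp = self.arms[np.random.randint(len(self.arms))]
        for eta in (0.5, 0.25, 0.1, 0.0):
            mix = (1 - eta) * self.safe_arm + eta * x_exp
            if self._bounds(mix)[0] >= self.threshold or eta == 0.0:
                return mix

    def update(self, x, reward):
        self.V += np.outer(x, x)
        self.b += reward * x

# Example run on synthetic data (hypothetical numbers).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    arms = rng.normal(size=(10, 2))
    bandit = SafeLinearBandit(arms, safe_arm=np.array([0.5, 0.5]), threshold=0.2)
    theta_true = np.array([0.6, 0.4])
    for _ in range(100):
        x = bandit.select()
        bandit.update(x, x @ theta_true + 0.1 * rng.normal())
```

The key design point mirrored from the abstract is that the learner never plays an arm it cannot argue is safe: either the arm's lower confidence bound exceeds the threshold, or the arm is a conservative combination anchored at the initially known safe arm.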